On Efficiency in Hierarchical Reinforcement Learning

Neural Information Processing Systems

Aspects (2) and (3) have received empirical validation in a large number of papers, but little theoretical analysis, except in some special cases such as the work of Mann et al.


We thank all reviewers for spending their valuable time reviewing our paper. We now answer some specific questions in detail. The definition of "equivalent subMDPs" (Definition 2) requires that (1) there is a bijection between the state spaces and (2) under this bijection the subMDPs have the same transition/reward models. As discussed in the paper, (2) can be relaxed to similar transition/reward models. For the statistical efficiency results, this assumption could be relaxed further; however, that is beyond the scope of this paper and we aim to address it in future work. We will add a more explicit discussion of the comparison to Mann et al. (2015). Theorem 1 in this paper is partially motivated by Osband et al. (2013); however, we consider a very different setting. Specifically, (1) Theorem 1 considers hierarchical structure while Osband et al. (2013) does not.
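The equivalence condition in Definition 2 — a state-space bijection under which the transition and reward models agree — can be checked directly in the tabular case. The following is a minimal illustrative sketch; the function and variable names are ours, not the paper's, and it assumes finite subMDPs represented as dense arrays:

```python
import numpy as np

def submdps_equivalent(P1, R1, P2, R2, bijection, atol=1e-8):
    """Check whether two tabular subMDPs are equivalent under a state bijection.

    P1, P2: transition tensors of shape (S, A, S); R1, R2: rewards of shape (S, A).
    bijection: array mapping each state s of subMDP 1 to its counterpart in subMDP 2.
    Illustrative sketch only; representation is an assumption, not from the paper.
    """
    b = np.asarray(bijection)
    # Rewards must match after relabeling states through the bijection.
    if not np.allclose(R1, R2[b], atol=atol):
        return False
    # Transitions must match: P1(s' | s, a) == P2(b(s') | b(s), a).
    return np.allclose(P1, P2[b][:, :, b], atol=atol)
```

As the rebuttal notes, exact equality of the models can be relaxed to approximate similarity, which here would amount to loosening the tolerance `atol`.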




Review for NeurIPS paper: On Efficiency in Hierarchical Reinforcement Learning


The justification relies on Bayesian regret bounds: by comparing the bound for the "flat" MDP with the bound for the decomposed MDP, the authors show that hierarchical decomposition can lead to statistically efficient learning, and derive the following conditions for a beneficial decomposition: either all subMDPs have a small state space, or the original MDP can be decomposed into a small number of equivalent subMDPs. The paper then discusses the computational complexity of planning with hierarchical structure via the Planning with Exit Profiles (PEP) algorithm. The authors derive a bound on the computational complexity of PEP, which implies the following requirements for efficient planning: all subMDPs must be small, with a small number of exit profiles and total exit states. Finally, the paper presents a bound on the performance of PEP and discusses the conditions for finding high-quality exit profiles, which is a requirement for the near-optimal performance of PEP.
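The core idea behind planning with exit profiles can be illustrated in a tabular setting: fix an exit profile (an assignment of values to a subMDP's exit states), then solve the subMDP with those exit values pinned, so that equivalent subMDPs need to be solved only once per exit profile. The sketch below is a rough illustration under these assumptions (discounted value iteration, dense arrays, names chosen by us); it is not the PEP algorithm as defined in the paper:

```python
import numpy as np

def solve_submdp(P, R, exit_values, horizon=50, gamma=0.95):
    """Value iteration inside one subMDP, with exit states pinned to the
    values given by a chosen exit profile.

    P: (S, A, S) transition tensor, R: (S, A) rewards,
    exit_values: dict {exit_state: value taken from the exit profile}.
    Illustrative sketch; the discounted setting is an assumption.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    for s, v in exit_values.items():
        V[s] = v
    for _ in range(horizon):
        # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
        Q = R + gamma * P @ V
        V_new = Q.max(axis=1)
        for s, v in exit_values.items():
            V_new[s] = v  # exit states keep their profile value
        V = V_new
    return V
```

Because equivalent subMDPs share the same solution for a given exit profile, the cost of this inner solve is paid once per equivalence class and exit profile, which is where the computational savings described in the review come from.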


Review for NeurIPS paper: On Efficiency in Hierarchical Reinforcement Learning


Quoting from the reviewers: R1: The paper presents a novel framework for analyzing potential efficiencies in reinforcement learning due to hierarchical structure in MDPs. This framework formally defines several useful concepts (subMDPs, equivalent subMDPs, exit states and exit profiles) that allow for an elegant refinement of regret bounds in a well-defined regime. The identification of particular properties (subMDPs, exit state set, and equivalence of subMDPs) provides a clear and useful framework for theoretical analysis of hierarchical reinforcement learning. Overall this paper provides an elegant, concrete framework for formalizing hierarchical structure and quantifying the efficiency such structure may allow.

The paper provides a theoretical analysis of hierarchical reinforcement learning, deriving results on learning and planning efficiency when the reinforcement learning problem has repeated structure.